
A.9.4 Numerical and Related Functions
Number representation and numerical evaluation
Large integers and high-precision approximate numbers are stored as arrays of base 2^16 or 2^32 digits, depending on the lengths of machine integers.
Precision is internally maintained as a floating-point number.
IntegerDigits and related base conversion functions use a recursive divide-and-conquer algorithm.
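As an illustration of the divide-and-conquer idea (a Python sketch, not Mathematica's actual implementation), the number is split at a power of the base near the square root of its size and the halves are converted recursively; a production version would cache the repeated squarings:

    def digits(n, base):
        """Digits of n >= 0 in the given base, most significant first,
        by divide-and-conquer splitting at a large power of the base."""
        if n < base:
            return [n]
        power, exp = base, 1          # power == base**exp
        while power * power <= n:     # repeated squaring; a real
            power *= power            # implementation caches these
            exp *= 2
        high, low = divmod(n, power)
        low_digits = digits(low, base)
        return digits(high, base) + [0] * (exp - len(low_digits)) + low_digits

    assert digits(123456, 10) == [1, 2, 3, 4, 5, 6]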
N
uses an adaptive procedure to increase its internal working precision in
order to achieve whatever overall precision is requested.
Floor, Ceiling and related functions use an adaptive procedure similar to N to generate exact results from exact input.
Basic arithmetic
Multiplication of large integers and high-precision approximate numbers is
done using interleaved Karatsuba and FFT algorithms.
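For illustration, one Karatsuba step replaces a size-n product by three size-n/2 products. A minimal Python sketch (the cutoff value here is arbitrary; the real kernel works on digit arrays in C and switches to FFT methods above a threshold):

    def karatsuba(x, y, cutoff=1 << 64):
        """Multiply nonnegative integers by Karatsuba's three-product
        recursion; operands below the cutoff fall back to plain multiply."""
        if x < cutoff or y < cutoff:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)
        yh, yl = y >> m, y & ((1 << m) - 1)
        high = karatsuba(xh, yh, cutoff)
        low = karatsuba(xl, yl, cutoff)
        mid = karatsuba(xh + xl, yh + yl, cutoff) - high - low
        return (high << (2 * m)) + (mid << m) + low

    assert karatsuba(2**100 + 7, 2**99 + 3) == (2**100 + 7) * (2**99 + 3)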
Integer powers are found by an algorithm based on Horner's rule.
Reciprocals and rational powers of approximate numbers use Newton's method.
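For the reciprocal of a > 0, Newton's method applied to f(x) = 1/x - a gives the division-free iteration x <- x(2 - ax), which doubles the number of correct digits at each step. A machine-precision Python sketch of the idea:

    import math

    def reciprocal(a, steps=6):
        """Approximate 1/a for a > 0 by the division-free Newton
        iteration x <- x*(2 - a*x); convergence is quadratic."""
        x = 2.0 ** -math.frexp(a)[1]   # crude guess from the exponent of a
        for _ in range(steps):
            x = x * (2.0 - a * x)
        return x

    assert abs(reciprocal(3.0) - 1.0 / 3.0) < 1e-12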
Exact roots start from numerical estimates.
Significance arithmetic is used for all arithmetic with approximate numbers beyond machine precision.
Basic arithmetic uses approximately 400 pages of C source code.
Pseudorandom numbers
Random uses the Wolfram rule 30 cellular automaton generator for integers.
It uses a Marsaglia-Zaman subtract-with-borrow generator for real numbers.
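A toy Python version of the rule 30 idea (the register width, seeding and bit extraction here are illustrative choices, not the actual generator): each cell updates as left XOR (center OR right), and the column of the seeded cell supplies the bits.

    def rule30_bits(width=63):
        """Yield bits from the column of the initially seeded cell of a
        cyclic rule 30 cellular automaton (toy version of the idea)."""
        mask = (1 << width) - 1
        state = 1                      # single seeded cell at position 0
        while True:
            yield state & 1
            left = ((state << 1) | (state >> (width - 1))) & mask
            right = (state >> 1) | ((state & 1) << (width - 1))
            state = left ^ (state | right)   # rule 30 update

    g = rule30_bits()
    assert [next(g) for _ in range(7)] == [1, 1, 0, 1, 1, 1, 0]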
Number-theoretical functions
GCD
uses the Jebelean-Weber accelerated GCD algorithm, together with a combination
of Euclid's algorithm and an algorithm based on iterative removal of powers
of 2.
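The iterative removal of powers of 2 is the binary GCD algorithm; a minimal Python sketch:

    def binary_gcd(a, b):
        """GCD by iterative removal of factors of 2 and subtraction,
        avoiding division entirely (binary GCD algorithm)."""
        if a == 0:
            return b
        if b == 0:
            return a
        shift = ((a | b) & -(a | b)).bit_length() - 1   # common power of 2
        a >>= (a & -a).bit_length() - 1                 # make a odd
        while b:
            b >>= (b & -b).bit_length() - 1             # make b odd
            if a > b:
                a, b = b, a
            b -= a            # difference of two odd numbers is even
        return a << shift

    assert binary_gcd(48, 18) == 6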
PrimeQ
first tests for divisibility using small primes, then uses the Miller-Rabin
strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
As of 1997, this procedure is known to be correct only for n < 10^16, and it is conceivable that for larger n it could claim a composite number to be prime.
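For reference, a compact Python version of the strong pseudoprime test for a single base (the Lucas stage of the procedure is omitted here):

    def strong_pseudoprime(n, base):
        """Miller-Rabin strong pseudoprime test of an odd n > 2 for one
        base: write n - 1 = d * 2^s with d odd, then check the chain of
        square roots of 1."""
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(base, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                return True
        return False          # n is certainly composite

    # 2047 = 23 * 89 passes base 2 but is caught by base 3.
    assert strong_pseudoprime(2047, 2) and not strong_pseudoprime(2047, 3)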
The package NumberTheory`PrimeQ` contains a much slower algorithm that has been proved correct for all n. It can return an explicit certificate of primality.
FactorInteger switches between removing small primes by trial division and using the Pollard p-1, Pollard rho and quadratic sieve algorithms.
The package NumberTheory`FactorIntegerECM` contains an elliptic curve algorithm suitable for factoring some very large integers.
Prime and PrimePi use sparse caching and sieving. For large n, the Lagarias-Miller-Odlyzko algorithm for PrimePi is used, based on asymptotic estimates of the density of primes, and is inverted to give Prime.
LatticeReduce uses the Lenstra-Lenstra-Lovasz lattice reduction algorithm.
To find a requested number of terms, ContinuedFraction uses a modification of Lehmer's indirect method, with a self-restarting divide-and-conquer algorithm to reduce the numerical precision required at each step.
ContinuedFraction uses recurrence relations to find periodic continued fractions for quadratic irrationals.
FromContinuedFraction uses iterated matrix multiplication optimized by a divide-and-conquer method.
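The value of [a0; a1, a2, ...] can be read off the first column of the product of the matrices [[ai, 1], [1, 0]], which holds the numerator and denominator of the convergent. A Python sketch of the divide-and-conquer product:

    from fractions import Fraction

    def from_cf(terms):
        """Evaluate [a0; a1, a2, ...] as the first column p/q of the
        divide-and-conquer product of the matrices [[a, 1], [1, 0]]."""
        def prod(lo, hi):
            if hi - lo == 1:
                a = terms[lo]
                return (a, 1, 1, 0)       # ((a, 1), (1, 0)) flattened
            mid = (lo + hi) // 2
            a, b, c, d = prod(lo, mid)
            e, f, g, h = prod(mid, hi)
            return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)
        p, _, q, _ = prod(0, len(terms))
        return Fraction(p, q)

    assert from_cf([1, 2, 2, 2]) == Fraction(17, 12)   # a convergent of sqrt(2)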
Combinatorial functions
Most combinatorial functions use sparse caching and recursion.
Factorial, Binomial and related functions use a divide-and-conquer algorithm to balance the number of digits in subproducts.
Fibonacci[n] uses an iterative method based on the binary digit sequence of n.
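A Python sketch of this scheme, using the doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 driven by the binary digits of n from the most significant end:

    def fibonacci(n):
        """F(n) by processing the binary digits of n with the doubling
        identities for the pair (F(k), F(k+1))."""
        a, b = 0, 1                    # (F(0), F(1))
        for bit in bin(n)[2:]:         # most significant digit first
            c = a * (2 * b - a)        # F(2k)
            d = a * a + b * b          # F(2k+1)
            a, b = (d, c + d) if bit == '1' else (c, d)
        return a

    assert [fibonacci(k) for k in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]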
PartitionsP[n] uses Euler's pentagonal formula for small n, and the non-recursive Hardy-Ramanujan-Rademacher method for larger n.
ClebschGordan and related functions use generalized hypergeometric series.
Elementary transcendental functions
Exponential and trigonometric functions use Taylor series, stable recursion by argument doubling, and functional relations.
Log and inverse trigonometric functions use Taylor series and functional relations.
Mathematical constants
Values of constants are cached once computed.
Binary splitting is used to subdivide computations of constants.
Pi uses the Chudnovsky formula for computations up to ten million digits.
E is computed from its series expansion.
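For example, the partial sums of e = 1 + 1 + 1/2! + ... can be organized by binary splitting: each half of the term range returns an integer pair (P, Q) representing its contribution, so all intermediate arithmetic is on big integers and a single division is performed at the end. A Python sketch:

    from fractions import Fraction

    def split(a, b):
        """Return (P, Q) with sum_{k=a}^{b-1} a!/k! = P/Q and Q = b!/a!,
        so everything except the final division is integer arithmetic."""
        if b - a == 1:
            return (a + 1, a + 1)
        m = (a + b) // 2
        p1, q1 = split(a, m)
        p2, q2 = split(m, b)
        return (p1 * q2 + p2, q1 * q2)

    p, q = split(0, 25)                # 25 terms of sum 1/k!
    assert str(float(Fraction(p, q))).startswith("2.71828182845")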
EulerGamma uses the Brent-McMillan algorithm.
Catalan is computed from a linearly convergent Ramanujan sum.
Special functions
For machine precision most special functions use Mathematica-derived rational minimax approximations. The notes that follow apply mainly to arbitrary precision.
Orthogonal polynomials use stable recursion formulas for polynomial cases and hypergeometric functions in general.
Gamma uses recursion, functional equations and the Binet asymptotic formula.
Incomplete gamma and beta functions use hypergeometric series and continued fractions.
PolyGamma uses Euler-Maclaurin summation, functional equations and recursion.
PolyLog uses Euler-Maclaurin summation, expansions in terms of incomplete gamma functions and numerical quadrature.
Zeta
and related functions use Euler-Maclaurin summation and functional equations.
Near the critical strip they also use the Riemann-Siegel formula.
StieltjesGamma uses Keiper's algorithm based on numerical quadrature of an integral representation of the zeta function.
The error function and functions related to exponential integrals are all evaluated using incomplete gamma functions.
The inverse error functions use binomial search and a high-order generalized Newton's method.
Bessel functions use series and asymptotic expansions. For integer orders, some also use stable forward recursion.
The hypergeometric functions use functional equations, stable recurrence
relations, series expansions and asymptotic series. Methods from NSum and NIntegrate are also sometimes used.
ProductLog uses high-order Newton's method starting from rational approximations and asymptotic expansions.
Elliptic integrals are evaluated using the descending Gauss transformation.
Elliptic theta functions use series summation with recursive evaluation of series terms.
Other elliptic functions mostly use arithmetic-geometric mean methods.
Mathieu functions use Fourier series. The Mathieu characteristic functions use generalizations of Blanch's Newton method.
Numerical integration
With Method->Automatic, NIntegrate uses GaussKronrod in one dimension, and MultiDimensional otherwise.
If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->QuasiMonteCarlo.
GaussKronrod: adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
DoubleExponential: non-adaptive double-exponential quadrature.
Trapezoidal: elementary trapezoidal method.
Oscillatory: transformation to handle integrals containing trigonometric and Bessel functions.
MultiDimensional: adaptive Genz-Malik algorithm.
MonteCarlo: non-adaptive Monte Carlo.
QuasiMonteCarlo: non-adaptive Halton-Hammersley-Wozniakowski algorithm.
Numerical sums and products
If the ratio test does not give 1, the Wynn epsilon algorithm is applied to a sequence of partial sums or products.
Otherwise Euler-Maclaurin summation is used with Integrate or NIntegrate.
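A simplified Python sketch of Wynn's epsilon algorithm on a vector of partial sums (only the even-numbered epsilon columns are estimates of the limit):

    def wynn_epsilon(s):
        """Apply Wynn's epsilon recursion to the partial sums s and
        return the last entry of the highest even-numbered column."""
        prev = [0.0] * (len(s) + 1)    # the epsilon_{-1} column of zeros
        cur = list(s)                  # the epsilon_0 column
        for _ in range((len(s) - 1) // 2 * 2):
            nxt = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
                   for i in range(len(cur) - 1)]
            prev, cur = cur, nxt
        return cur[-1]

    sums, s = [], 0.0
    for k in range(1, 11):             # ten terms of ln 2 = 1 - 1/2 + 1/3 - ...
        s += (-1.0) ** (k + 1) / k
        sums.append(s)
    print(wynn_epsilon(sums))          # close to 0.693147; the raw sum is 0.6456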
Numerical differential equations
With Method->Automatic, NDSolve switches between a non-stiff Adams method and a stiff Gear method, both based on LSODE.
Adams: implicit Adams method with order between 1 and 12.
Gear: backward difference formula method with order between 1 and 5.
RungeKutta: Fehlberg order 4-5 Runge-Kutta method for non-stiff equations.
For linear boundary value problems the Gel'fand-Lokutsiyevskii chasing method is used.
For 1+1-dimensional PDEs the method of lines is used.
The code for NDSolve is about 500 pages long.
Approximate equation solving and minimization
Polynomial root finding is done based on the Jenkins-Traub algorithm.
For sparse linear systems, Solve and NSolve
use several efficient numerical methods, mostly based on Gauss factoring
with Markowitz products (approximately 250 pages of code).
For systems of algebraic equations, NSolve
computes a numerical Gröbner basis using an efficient monomial ordering,
then uses eigensystem methods to extract numerical roots.
FindRoot uses a damped Newton's method, the secant method and Brent's method.
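For illustration, a bare-bones secant iteration of the kind used when derivatives are unavailable (a sketch, not Mathematica's implementation):

    def secant(f, x0, x1, tol=1e-12, max_iter=50):
        """Find a root of f by the secant method: each step takes the
        root of the line through the last two sample points."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0, x1, f1 = x1, f1, x2, f(x2)
        return x1

    root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
    assert abs(root - 2.0 ** 0.5) < 1e-10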
With Method->Automatic, FindMinimum
uses various methods due to Brent: the conjugate gradient in one dimension,
and a modification of Powell's method in several dimensions.
If the function to be minimized is a sum of squares, FindMinimum uses the Levenberg-Marquardt method (Method->LevenbergMarquardt).
With Method->Newton FindMinimum uses Newton's method. With Method->QuasiNewton FindMinimum uses the BFGS version of the quasi-Newton method.
ConstrainedMax and related functions use an enhanced version of the simplex algorithm.
Data manipulation
Fourier
uses the FFT algorithm with decomposition of the length into prime factors.
When the prime factors are large, fast convolution methods are used to maintain
asymptotic complexity.
For real input, Fourier uses a real transform method.
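A minimal radix-2 Cooley-Tukey sketch in Python for power-of-two lengths (the general algorithm applies the same split for every prime factor of the length, and Mathematica's normalization conventions differ):

    import cmath

    def fft(x):
        """Radix-2 Cooley-Tukey FFT: split into even- and odd-indexed
        halves, transform recursively, and combine with twiddle factors."""
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return ([even[k] + tw[k] for k in range(n // 2)] +
                [even[k] - tw[k] for k in range(n // 2)])

    spectrum = fft([1.0, 2.0, 3.0, 4.0])
    # spectrum is [10, -2+2j, -2, -2-2j] up to rounding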
ListConvolve and ListCorrelate use FFT algorithms when possible. For exact integer inputs, enough digits are computed to deduce exact integer results.
InterpolatingFunction uses divided differences to construct Lagrange or Hermite interpolating polynomials.
Fit works by computing the product of the response vector with the pseudoinverse of the design matrix.
Approximate numerical linear algebra
Machine-precision matrices are typically converted to a special internal representation for processing.
Algorithms similar to those of LINPACK, EISPACK and LAPACK are used when appropriate.
LUDecomposition, Inverse, RowReduce and Det use Gaussian elimination with partial pivoting. LinearSolve uses the same methods, together with iterative improvement for high-precision numbers.
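A small dense-matrix sketch of partial pivoting in Python (the built-in routines work on packed numerical arrays): at each column, the row with the largest remaining pivot is swapped into place before elimination.

    def solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting,
        followed by back substitution."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[pivot] = M[pivot], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):                 # back substitution
            s = sum(M[r][c] * x[c] for c in range(r + 1, n))
            x[r] = (M[r][n] - s) / M[r][r]
        return x

    x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
    assert abs(x[0] - 0.8) < 1e-12 and abs(x[1] - 1.4) < 1e-12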
SingularValues uses the QR algorithm with Givens rotations. PseudoInverse and NullSpace are based on SingularValues.
QRDecomposition uses Householder transformations.
SchurDecomposition uses QR iteration.
MatrixExp uses Schur decomposition.
Exact numerical linear algebra
Inverse and LinearSolve use efficient row reduction based on numerical approximation.
With Modulus->n, modular Gaussian elimination is used.
Det uses modular methods and row reduction, constructing a result using the Chinese Remainder Theorem.
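A Python sketch of the modular approach (the helper names are illustrative): the determinant is computed modulo each of several primes by elimination over GF(p), then reassembled with the Chinese Remainder Theorem; the primes must multiply to more than twice the absolute value of the true determinant.

    from math import prod

    def det_mod(A, p):
        """Determinant of an integer matrix modulo a prime p, by
        Gaussian elimination over GF(p)."""
        M = [[x % p for x in row] for row in A]
        n, d = len(M), 1
        for col in range(n):
            pivot = next((r for r in range(col, n) if M[r][col]), None)
            if pivot is None:
                return 0
            if pivot != col:
                M[col], M[pivot] = M[pivot], M[col]
                d = -d                      # row swap flips the sign
            inv = pow(M[col][col], -1, p)
            d = d * M[col][col] % p
            for r in range(col + 1, n):
                f = M[r][col] * inv % p
                for c in range(col, n):
                    M[r][c] = (M[r][c] - f * M[col][c]) % p
        return d % p

    def det_crt(A, primes):
        """Combine the residues with the CRT and map the result back
        to a signed integer."""
        N = prod(primes)
        x = 0
        for p in primes:
            Np = N // p
            x = (x + det_mod(A, p) * Np * pow(Np, -1, p)) % N
        return x - N if x > N // 2 else x

    assert det_crt([[3, 1], [4, 2]], [101, 103]) == 2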
Eigenvalues works by interpolating the characteristic polynomial.
MatrixExp uses Putzer's method or Jordan decomposition.